Pruning neural network models for gene regulatory dynamics using data and domain knowledge
It is common to assess a model's merit for scientific discovery, and thus novel insights, by how well it aligns with already available domain knowledge - a dimension that is currently largely disregarded in the comparison of neural network models. While pruning can simplify deep neural network architectures and excels at identifying sparse models, as we show in the context of gene regulatory network inference, state-of-the-art techniques struggle with biologically meaningful structure learning. To address this issue, we propose DASH, a generalizable framework that guides network pruning by using domain-specific structural information in model fitting and leads to sparser, more interpretable models that are more robust to noise. Using both synthetic data with ground-truth information and real-world gene expression data, we show that DASH, using knowledge about gene interaction partners within the putative regulatory network, outperforms general pruning methods by a large margin and yields deeper insights into the biological systems being studied.
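The core idea of guiding pruning with domain knowledge can be illustrated with a minimal sketch: a pruning score that blends weight magnitude with a prior matrix encoding believed interactions (e.g., putative regulatory edges). The function name, the blending scheme, and the `alpha` parameter are illustrative assumptions, not the actual DASH criterion.

```python
import numpy as np

def prune_mask(weights, prior, alpha=0.5, sparsity=0.5):
    """Hypothetical prior-guided pruning score (not the DASH formula).

    weights: weight matrix of a layer.
    prior:   same-shape matrix in [0, 1] encoding domain knowledge
             about which connections are biologically plausible.
    Returns a boolean mask keeping the top-scoring fraction of weights.
    """
    # Blend normalized magnitude with the domain prior.
    score = alpha * np.abs(weights) / (np.abs(weights).max() + 1e-12) \
            + (1 - alpha) * prior
    threshold = np.quantile(score, sparsity)
    return score > threshold

# Toy example: prior expects only the diagonal interactions.
W = np.array([[0.9, 0.1], [0.05, 0.8]])
P = np.array([[1.0, 0.0], [0.0, 1.0]])
mask = prune_mask(W, P, sparsity=0.5)
print(mask)  # keeps the prior-supported, high-magnitude entries
```

Weights that are both large and supported by the prior survive, so the pruned structure agrees with domain knowledge rather than magnitude alone.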
Flexible mapping of abstract domains by grid cells via self-supervised extraction and projection of generalized velocity signals
Grid cells in the medial entorhinal cortex create remarkable periodic maps of explored space during navigation. Recent studies show that they form similar maps of abstract cognitive spaces. Examples of such abstract environments include auditory tone sequences in which the pitch is continuously varied or images in which abstract features are continuously deformed (e.g., a cartoon bird whose legs stretch and shrink).
H-Mem: Harnessing synaptic plasticity with Hebbian Memory Networks
The ability to base current computations on memories from the past is critical for many cognitive tasks such as story understanding. Hebbian-type synaptic plasticity is believed to underlie the retention of memories over medium and long time scales in the brain. However, it is unclear how such plasticity processes are integrated with computations in cortical networks. Here, we propose Hebbian Memory Networks (H-Mems), a simple neural network model that is built around a core hetero-associative network subject to Hebbian plasticity. We show that the network can be optimized to utilize the Hebbian plasticity processes for its computations. H-Mems can one-shot memorize associations between stimulus pairs and use these associations for decisions later on. Furthermore, they can solve demanding question-answering tasks on synthetic stories. Our study shows that neural network models are able to enrich their computations with memories through simple Hebbian plasticity processes.
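The one-shot association mechanism described above can be sketched with a classical Hebbian outer-product memory: a single update stores a key-value pair, and a matrix-vector product retrieves it. This is a textbook hetero-associative memory, not the authors' H-Mem implementation; dimensions and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # stimulus dimension (illustrative)

# Three random stimulus pairs to associate (keys normalized).
keys = rng.standard_normal((3, d))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values = rng.standard_normal((3, d))

# One-shot Hebbian storage: W += value (outer) key for each pair.
W = np.zeros((d, d))
for k, v in zip(keys, values):
    W += np.outer(v, k)

# Retrieval: presenting a stored key recalls its associated value.
recalled = W @ keys[0]
corr = np.dot(recalled, values[0]) / (
    np.linalg.norm(recalled) * np.linalg.norm(values[0]))
print(f"recall correlation: {corr:.2f}")
```

With near-orthogonal random keys in high dimension, cross-talk between stored pairs stays small and the recalled vector correlates strongly with the stored value.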
Contrast transfer functions help quantify neural network out-of-distribution generalization in HRTEM
DaCosta, Luis Rangel, Scott, Mary C.
Neural networks, while effective for tackling many challenging scientific tasks, are not known to perform well out-of-distribution (OOD), i.e., within domains which differ from their training data. Understanding neural network OOD generalization is paramount to their successful deployment in experimental workflows, especially when ground-truth knowledge about the experiment is hard to establish or experimental conditions significantly vary. With inherent access to ground-truth information and fine-grained control of underlying distributions, simulation-based data curation facilitates precise investigation of OOD generalization behavior. Here, we probe generalization with respect to imaging conditions of neural network segmentation models for high-resolution transmission electron microscopy (HRTEM) imaging of nanoparticles, training and measuring the OOD generalization of over 12,000 neural networks using synthetic data generated via random structure sampling and multislice simulation. Using the HRTEM contrast transfer function, we further develop a framework to compare information content of HRTEM datasets and quantify OOD domain shifts. We demonstrate that neural network segmentation models enjoy significant performance stability, but will smoothly and predictably worsen as imaging conditions shift from the training distribution. Lastly, we consider limitations of our approach in explaining other OOD shifts, such as of the atomic structures, and discuss complementary techniques for understanding generalization in such settings.
A Model Zoo Generation Details
In our model zoos, we use three architectures. The code to generate the models can be found on www.modelzoos.cc. Data Management and Documentation: To ensure that every zoo is reproducible, expandable, and understandable, we document each zoo. For each zoo, a Readme file is generated, displaying basic information about the zoo. A second json file contains the performance metrics during training.
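The documentation scheme described above can be sketched as follows. The file names (`README.txt`, `metrics.json`) and the field layout are illustrative assumptions; the zoos' actual schema may differ.

```python
import json

# Hypothetical basic info for one zoo (architecture, size, dataset).
zoo_info = {"architecture": "CNN-small", "num_models": 1000, "dataset": "MNIST"}

# Hypothetical per-model training metrics, keyed by model identifier.
metrics = {"model_0001": {"epoch": [1, 2], "test_acc": [0.91, 0.95]}}

# Readme with basic information about the zoo.
with open("README.txt", "w") as f:
    f.write("Model zoo\n")
    for key, value in zoo_info.items():
        f.write(f"{key}: {value}\n")

# Companion JSON file with the performance metrics recorded during training.
with open("metrics.json", "w") as f:
    json.dump(metrics, f, indent=2)
```

Keeping the metrics in a machine-readable JSON file alongside a human-readable Readme makes each zoo both expandable and easy to audit.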